Title: Mobile Augmented Reality using scalable recognition and tracking
Authors: Jae-Deok Ha, Jinki Jung, Byungok Han, Kyusung Cho, H. Yang
DOI: 10.1109/VR.2011.5759473
Venue: 2011 IEEE Virtual Reality Conference

In this paper, a new mobile Augmented Reality (AR) framework that scales with the number of augmented objects is proposed. Scalability is achieved by splitting the work between a visual-word recognition module on a remote server and a mobile phone that detects, tracks, and augments target objects using the information received from the server. The server and the phone communicate over a conventional Wi-Fi connection. In the experiments, the cold start of an AR service takes 0.2 seconds on a 10k-object database, which is acceptable for a real-world AR application.
Title: Virtual game show host — Dr. Chestr
Authors: R. Sakpal, D. Wilson
DOI: 10.1109/VR.2011.5759486
Venue: 2011 IEEE Virtual Reality Conference

This paper describes the design, implementation, and evaluation of an interactive virtual human, Dr. Chestr: Computerized Host Encouraging Students to Review. Game show hosts project a distinctive personality that becomes the trademark of their respective shows. Our aim is to create virtual humans that interact naturally and spontaneously using speech, emotion, and gesture. Dr. Chestr is our virtual game show host, with a personality designed to increase user engagement. He tests users with questions about the C++ programming language and lets them respond using the most natural form of interaction: speech. We present the architecture and user evaluations of the Dr. Chestr Game Show.
Title: Effects of sensory feedback while interacting with graphical menus in virtual environments
Authors: Nguyen-Thong Dang, Vincent Perrot, D. Mestre
DOI: 10.1109/VR.2011.5759467
Venue: 2011 IEEE Virtual Reality Conference

This study investigates the effect of three types of sensory feedback (visual, auditory, and passive haptic) on two-handed interaction with graphical menus in virtual environments. Subjects controlled the position and orientation of a graphical menu with their non-dominant hand and interacted with menu items using their dominant index fingertip. An ISO 9241-9-based multi-tapping task and a sliding task were used to evaluate performance under the different feedback conditions. Adding passive haptic feedback to visual feedback increased movement time and error rate and decreased throughput in the multi-tapping task, but outperformed visual-only and visual-auditory feedback in the sliding task (in terms of movement time and the number of times contact between the finger and the pointer was lost). The results also showed that visual-auditory feedback, even though some subjects judged it useful, decreased performance in the sliding task compared to visual-only feedback.
Title: Recognition-driven 3D navigation in large-scale virtual environments
Authors: Wei Guan, Suya You, U. Neumann
DOI: 10.1109/VR.2011.5759439
Venue: 2011 IEEE Virtual Reality Conference

We present a recognition-driven navigation system for large-scale 3D virtual environments. The system has three parts: virtual environment reconstruction, feature database building, and recognition-based navigation. The virtual environment is reconstructed automatically from LIDAR data and aerial images. The feature database consists of image patches with features and registered location and orientation information. The database images are taken at different distances from the scenes and at various viewing angles, and are then partitioned into smaller patches. When a user navigates the real world with a handheld camera, the captured image is used to estimate the camera's location and orientation, which are then reflected in the virtual environment. With the proposed patch approach, recognition is robust to large occlusions and runs in real time. Experiments show that the navigation system is efficient and well synchronized with real-world navigation.
Title: Multi-sensorial field display: Presenting spatial distribution of airflow and odor
Authors: H. Matsukura, T. Nihei, H. Ishida
DOI: 10.1109/VR.2011.5759448
Venue: 2011 IEEE Virtual Reality Conference

A new device has been developed for generating an airflow field and an odor-concentration distribution in a real environment and presenting them to the user. This device is called a multi-sensorial field (MSF) display. When two fans are placed facing each other, the airflows they generate collide and are deflected radially on a plane perpendicular to the original airflow direction. By exploiting this deflected airflow, the MSF display can present airflow blowing from the front without placing fans in front of the user. The directivity of the deflection is controlled by mounting nozzles on the fans to adjust the cross-sectional shape of the airflow jets. The MSF display can also generate an odor-concentration distribution in the real environment by introducing odor vapors into the airflow generated by the fans. The user can freely move his or her head and sniff at various locations in the generated odor distribution. The results of preliminary sensory tests are presented to show the potential of the MSF display.
Title: Full body haptic display for low-cost racing car driving simulators
Authors: Adrian Steinemann, Sebastian Tschudi, A. Kunz
DOI: 10.1109/VR.2011.5759490
Venue: 2011 IEEE Virtual Reality Conference

Motion platforms are advanced systems for driving simulators, and studies have shown that they imitate the real driving behavior of cars very accurately. Most low-cost driving simulators, however, lack motion platforms and fail to simulate real motion forces; their focus is on high-quality video and audio or on force feedback at the steering wheel. We aim to substitute the real motion forces with low-cost actuators that stimulate the human extremities to deepen immersion, thereby improving the quality of driving simulators without any motion platform. Our full-body haptic display concept for low-cost racing car simulators is based on air cushions and pull mechanisms that convey longitudinal and lateral forces, addressing the human mechanoreceptive and proprioceptive senses. The concept is analyzed in a user study with twenty-two participants.
Title: Depth judgment tasks and environments in near-field augmented reality
Authors: Gurjot Singh, J. Swan, J. A. Jones, S. Ellis
DOI: 10.1109/VR.2011.5759488
Venue: 2011 IEEE Virtual Reality Conference

In this poster abstract we describe an experiment that measured depth judgments in optical see-through augmented reality at near-field distances of 34 to 50 centimeters. The experiment compared two depth judgment tasks: perceptual matching, a closed-loop task, and blind reaching, a visually open-loop task. Each task was tested in both a real-world environment and an augmented reality environment, using a between-subjects design with 40 participants. Matching judgments were very accurate in the real world, with errors on the order of millimeters and very little variance. In contrast, matching judgments in augmented reality showed a linear trend of increasing overestimation with increasing distance, with a mean overestimation of ∼1 cm. With reaching judgments, participants underestimated by ∼4.5 cm in both augmented reality and the real world. We also discovered and solved a calibration problem that arises at near-field distances.
Title: FAAST: The Flexible Action and Articulated Skeleton Toolkit
Authors: Evan A. Suma, B. Lange, A. Rizzo, D. Krum, M. Bolas
DOI: 10.1109/VR.2011.5759491
Venue: 2011 IEEE Virtual Reality Conference

The Flexible Action and Articulated Skeleton Toolkit (FAAST) is middleware that facilitates the integration of full-body control into virtual reality applications and video games using OpenNI-compliant depth sensors (currently the PrimeSensor and the Microsoft Kinect). FAAST incorporates a VRPN server for streaming the user's skeleton joints over a network, which provides a convenient interface for custom virtual reality applications and games. This body pose information can be used, for example, to realistically puppet a virtual avatar or to control an on-screen mouse cursor. The toolkit also provides a configurable input emulator that detects human actions and binds them to virtual mouse and keyboard commands, which are sent to the actively selected window. Thus, FAAST can enable natural interaction for existing off-the-shelf video games that were not explicitly developed to support input from motion sensors. The actions and input bindings are configurable at run time, allowing users to customize the controls and sensitivity to their individual body types and preferences. In the future, we plan to substantially expand FAAST's action lexicon, provide support for recording and training custom gestures, and incorporate real-time head tracking using computer vision techniques.
Title: Immersive ParaView: A community-based, immersive, universal scientific visualization application
Authors: Nikhil Shetty, Aashish Chaudhary, D. Coming, W. Sherman, P. O’leary, E. Whiting, S. Su
DOI: 10.1109/VR.2011.5759487
Venue: 2011 IEEE Virtual Reality Conference

The availability of low-cost virtual reality (VR) systems, coupled with a growing population of researchers accustomed to newer interface styles, makes this a ripe time to help domain science researchers cross the bridge to utilizing immersive interfaces. The logical next step is for scientists, engineers, doctors, etc. to incorporate immersive visualization into their exploration and analysis workflows. However, from past experience, we know having access to equipment is not sufficient. There are also several software hurdles to overcome. Obstacles must be lowered to provide scientists, engineers, and medical professionals low-risk means of exploring technologies beyond their desktops.
Title: On accelerating a volume-based haptic feedback algorithm
Authors: Rui Hu, K. Barner, K. Steiner
DOI: 10.1109/VR.2011.5759474
Venue: 2011 IEEE Virtual Reality Conference

The importance of haptic feedback is recognized by an increasing number of researchers in the virtual reality field. Recently, a volume-based haptic feedback approach has emerged that samples the intersection volume between objects along the three axes of 3D space to render an accurate interaction force. This paper presents a method to reduce the complexity of that volume-based force feedback algorithm. The core of the proposed algorithm is to sample the intersection volume between two objects only once rather than three times; for the other two axes, a penetration pair reconstruction algorithm generates the required information from the sampled result. Experimental results demonstrate that the proposed approach increases the frame rate of the volumetric haptic feedback algorithm by a factor of more than two, while the resulting force error remains modest compared to the original volume-based haptic feedback. The proposed algorithm may also be applied to accelerate other volume-based applications, e.g., volume-based force interaction between colliding deformable objects in virtual reality simulation. Moreover, the algorithm requires no pre-processing and is thus well suited to simulations where object topology is constantly changing, e.g., cutting, melting, or deformation processes.